Abstract:
With the continuing growth of the Internet of Things and the scale of its devices, processing data in real time with cloud computing resources alone is challenging. Edge computing technologies can improve the real-time performance of data processing. By introducing an FPGA into the computing node and exploiting its dynamic reconfigurability, an FPGA-based edge node can extend the capability of the edge node. In this paper, a task-based collaborative method for an FPGA-based edge computing system is proposed to enable collaboration among FPGA-based edge nodes, conventional edge nodes, and the cloud. The task model consists of two parts: task information and a task-dependent file. The task information describes the running information and dependency information required for task execution; the task-dependent file contains the FPGA configuration bitstream used while the task runs. By analyzing task behavior, this paper defines four basic behaviors, analyzes the critical attributes of each, and derives a task model suitable for FPGA-based edge nodes. Tasks with specific functions can be created by modifying the attributes of model nodes. Finally, the applicability of the model and the task-based collaborative method is verified by simulation. The experimental results show that the proposed task model supports cloud-edge collaboration in an FPGA-based edge computing environment.
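The two-part task model described here (task information plus a task-dependent file, with dependency-driven readiness) might be sketched as follows; the class and field names are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class TaskInfo:
    """Running and dependency information required for task execution."""
    task_id: str
    behavior: str                      # one of the four basic behaviors
    depends_on: list = field(default_factory=list)

@dataclass
class FpgaTask:
    info: TaskInfo
    bitstream: bytes                   # FPGA configuration bitstream (the task-dependent file)

    def is_ready(self, completed: set) -> bool:
        # A task can be dispatched once all of its dependencies have finished.
        return all(dep in completed for dep in self.info.depends_on)

task = FpgaTask(TaskInfo("t1", "compute", depends_on=["t0"]), bitstream=b"\x00")
```

In this sketch a scheduler would hold tasks until `is_ready` returns true, then load the bitstream onto the FPGA before execution.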
Abstract:
The existing federated structure protects data privacy with only a limited level of confidentiality: it is difficult to resist the reconstruction of other clients' data by malicious participants inside the federation, or the illegal manipulation of the shared information by external attackers or interceptors. Moreover, the average fusion algorithm used in the cloud center can hardly eliminate the negative impact of outliers on model updates, and it cannot promptly handle and fuse information from local clients that arrives with time delays or even packet loss. To strengthen the privacy protection and security of the federated learning (FL) mechanism while avoiding the negative impact of outliers on the aggregation of model parameters, we establish multi-level FL based on cloud-edge-client collaboration and outlier tolerance for fault diagnosis. First, we build a multi-level FL network framework based on cloud-edge-client collaboration, in which network parameters are shared level by level in a restricted way without exchanging raw data. Then, the edge side computes Euclidean distances over the restricted shared model parameters uploaded by each client to the primary edge, uses them to identify outliers, and weights the parameters accordingly for outlier tolerance. Next, an outlier-tolerance mechanism based on a centralized Kalman filtering algorithm adjusts the modeling error weights adaptively. Lastly, the cloud center asynchronously aggregates the model parameters uploaded by the highest-level edge using a sequential Kalman filtering algorithm and transmits the optimal model parameters back along the original path. Finally, the effectiveness of the proposed method is verified on the collected dataset.
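The Euclidean-distance outlier screening on the edge side could look roughly like the following sketch; the median-based threshold and the down-weighting rule are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def outlier_tolerant_aggregate(client_params, k=1.5):
    """Measure each client's Euclidean distance from the coordinate-wise
    mean of all uploaded parameter vectors; clients beyond k * median
    distance are treated as outliers and down-weighted before aggregation."""
    P = np.asarray(client_params, dtype=float)          # shape: (clients, dims)
    dists = np.linalg.norm(P - P.mean(axis=0), axis=1)  # distance of each client from the center
    thresh = k * np.median(dists)
    w = np.where(dists <= thresh, 1.0, thresh / (dists + 1e-12))
    w /= w.sum()                                        # normalize to a convex combination
    return (P * w[:, None]).sum(axis=0)

clients = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [10.0, 10.0]]  # last client is an outlier
robust = outlier_tolerant_aggregate(clients)
plain = np.mean(clients, axis=0)
```

Compared with the plain average, the weighted aggregate is pulled less toward the outlier client, which is the behavior the outlier-tolerance mechanism aims for.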
Abstract:
Nowadays, IoT systems can better satisfy users' service requirements by effectively utilizing edge computing resources. Designing an appropriate pricing scheme is critical for users to obtain the optimal computing resources at a reasonable price and for service providers to maximize profits, and the problem is complicated under incomplete information. State-of-the-art solutions focus on the pricing game between a single service provider and its users, ignoring the competition among multiple edge service providers. To address this challenge, we design an edge-intelligent hierarchical dynamic pricing mechanism based on cloud-edge-client collaboration. We introduce an improved double-layer Stackelberg game model to describe the cloud-edge-client collaboration. Technically, we propose a novel pricing prediction algorithm based on double-label Radius K-Nearest Neighbors, which reduces the number of invalid game rounds and thereby accelerates game convergence. The experimental results show that, compared with the traditional pricing scheme, our mechanism effectively improves the quality of service for users and realizes the maximum-benefit equilibrium for service providers. The mechanism is highly suitable for IoT applications with multiple edge service providers competing for resource allocation, such as intelligent agriculture or the Internet of Vehicles.
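A radius-based nearest-neighbour price prediction of the kind described can be sketched as below; this is a plain single-label Radius K-NN, and the paper's double-label variant and feature design are not reproduced:

```python
import numpy as np

def radius_knn_predict(X_train, y_train, x, radius):
    """Predict a starting quote as the mean label of all historical game
    states within `radius` of the query state; fall back to the single
    nearest neighbour when the radius contains no training point."""
    X = np.asarray(X_train, dtype=float)
    y = np.asarray(y_train, dtype=float)
    d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
    mask = d <= radius
    return y[mask].mean() if mask.any() else y[np.argmin(d)]

# Historical (state, price) pairs; predict a price for the state [1.5].
price = radius_knn_predict([[0.0], [1.0], [2.0], [10.0]],
                           [1.0, 2.0, 3.0, 20.0],
                           [1.5], radius=1.0)
```

Seeding each game round with such a prediction is what lets the mechanism skip invalid game iterations and converge faster.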
Abstract:
This research aims to reduce the network resource pressure on cloud centers (CC) and edge nodes, improve service quality, and optimize network performance. To this end, it studies and designs an edge-cloud collaboration framework based on the Internet of Things (IoT). First, Raspberry Pi (RP) single-board computers are used as the working nodes, and an edge-cloud collaboration framework is designed for edge computing. The framework consists of three layers: edge RP (ERP), monitoring & scheduling RP (MSRP), and CC. Collaborative communication can be realized between RPs and between RPs and CCs. Second, an edge-cloud matching algorithm is proposed for the delay-constrained scenario. Results obtained from actual task assignments demonstrate that the task delay of face recognition in edge-cloud collaboration mode is the lowest among the three working modes (edge only, CC only, and edge-CC collaboration), reaching only 12 s. The frame rates of the recognition results in the edge-cloud collaboration and CC modes are both smoother than in edge-only mode, and real-time object detection can be realized. The total energy consumption of users' offloaded execution decreases continuously as the number of users increases. Assuming 150 devices in the system, the energy-saving rate is affected by the frequency of task generation: as tasks are generated more frequently, the energy-saving rate decreases. Taking object detection as an example, system energy consumption drops from 18 W to 16 W after the algorithms are assigned. The proposed framework improves resource utilization and reduces system energy consumption. In addition, it provides theoretical and practical references for the implementation of the edge-cloud collaboration framework. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
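A minimal sketch of delay-constrained edge-cloud matching of the kind this framework performs is shown below; the cost model is an assumption (edge delay is pure computation, cloud delay adds an uplink transfer term), and all numbers are made up:

```python
def assign_task(cycles, data_bits, edge_hz, cloud_hz, uplink_bps, deadline):
    """Pick the execution site with the lower estimated delay, provided it
    meets the deadline; returns (site, delay) or (None, None) if neither fits."""
    t_edge = cycles / edge_hz                              # local execution only
    t_cloud = data_bits / uplink_bps + cycles / cloud_hz   # upload, then remote execution
    delay, site = min((t_edge, "edge"), (t_cloud, "cloud"))
    return (site, delay) if delay <= deadline else (None, None)

# A 1-gigacycle task with 1 MB of input: the faster cloud CPU wins despite the uplink cost.
site, delay = assign_task(cycles=1e9, data_bits=8e6, edge_hz=1e9,
                          cloud_hz=1e10, uplink_bps=1e7, deadline=1.0)
```

The MSRP layer in the framework would evaluate such a comparison per task and dispatch it to the ERP or the CC accordingly.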
Abstract:
As edge clouds become more widespread, it is important to study their impact on traditional application architectures, most importantly the separation of the data and control planes of traditional clients. We explore this impact using the virtualization of a Peer-to-Peer (P2P) client as a case study. In this model, an end user accesses and controls the virtual P2P client application through a web browser, and all P2P control messages originate and terminate at the virtual P2P client deployed inside the remote server. The web browser running on the user device only manages the download and upload of P2P data packets. BitTorrent, the most widely deployed P2P platform, is used to validate the feasibility and study the performance of our approach. We introduce a prototype that has been deployed on public cloud infrastructures and present simulation results showing clear improvements in the use of user resources. Based on this experience, we derive lessons on the challenges and benefits of such edge cloud-based deployments. (C) 2016 Elsevier B.V. All rights reserved.
Abstract:
To address the high link load of edge caching and the limited storage space of edge servers, a caching architecture based on collaboration between edge nodes and the cloud server is proposed. The content cache location is designed and optimized; it can be the content provider, the cloud server (CS), or an edge node (EN). In the proposed system, cloud servers collaborate with edge servers, and content caching performance is improved by coordinating caching between the cloud server and the edge servers. This paper proposes a cloud-edge collaborative caching model based on a greedy algorithm, comprising a content caching model and a collaborative caching model. Network architecture, file popularity estimation, link capacity, and other factors are considered in the model. Correspondingly, a cloud-edge collaborative caching algorithm based on the greedy algorithm is proposed: the optimization problem is decomposed into per-layer knapsack problems of cache layout, and the greedy algorithm is then used to solve the cache placement and cooperative caching knapsack problems. The affiliation between the CS cache and the EN caches in the layered architecture is also clarified and exploited. Experimental results show that the proposed edge caching method reduces the link load and improves the cache hit rate, and it also has clear advantages in average end-to-end service delay.
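The per-layer cache-placement step reduces to a knapsack instance, for which a value-density greedy heuristic like the following is a natural sketch; the file sizes and popularities below are made-up inputs, not data from the paper:

```python
def greedy_cache_placement(files, capacity):
    """files: (name, size, popularity) triples. Cache the files with the
    highest popularity per unit of size until capacity runs out -- the
    classic greedy heuristic for the 0/1 knapsack cache-placement problem."""
    cached, used = [], 0
    for name, size, pop in sorted(files, key=lambda f: f[2] / f[1], reverse=True):
        if used + size <= capacity:
            cached.append(name)
            used += size
    return cached

# "b" (density 3.0) and "a" (density 2.0) fill the 7-unit edge cache; "c" stays upstream.
placement = greedy_cache_placement([("a", 4, 8), ("b", 3, 9), ("c", 5, 5)], capacity=7)
```

Files rejected at one layer would be considered for the next layer up (EN, then CS, then the content provider), which is where the cooperative part of the algorithm comes in.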
Abstract:
The power distribution network business is gradually extending from the grid domain to the social service domain, and new business keeps expanding. Edge devices use a microservice architecture and container technology so that one physical device can process different services. Although the power distribution network IoT with a cloud-edge architecture has good scalability, scenarios in which edge devices lack sufficient resources may still occur. To support task scheduling and collaborative processing when edge device resources are constrained, this paper proposes a cloud-edge collaborative online scheduling method for distribution station area tasks under the microservice architecture. The paper models the characteristics of power tasks and their constraints in the cloud-edge containerized scenario, designs a priority policy and a task assignment policy based on the cloud-edge scheduling mechanism for containerized power tasks, and schedules tasks in real time with an improved online algorithm. Simulation results show that the proposed algorithm achieves high task execution efficiency, improves the completion rate of important tasks when edge device resources are limited, and improves system security through resource replacement.
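A priority-and-assignment policy of the general shape described here could be sketched with a heap: admit tasks to the edge device in descending importance and offload the rest to the cloud. The policy and the task tuples are illustrative assumptions, not the paper's exact scheme:

```python
import heapq

def schedule_online(tasks, edge_cpu):
    """tasks: (name, importance, cpu_demand) triples. Serve tasks in
    descending importance; tasks that do not fit in the remaining edge
    CPU budget are offloaded to the cloud."""
    heap = [(-importance, name, cpu) for name, importance, cpu in tasks]
    heapq.heapify(heap)                 # min-heap on negated importance = max-heap
    edge, cloud, free = [], [], edge_cpu
    while heap:
        _, name, cpu = heapq.heappop(heap)
        if cpu <= free:
            edge.append(name)
            free -= cpu
        else:
            cloud.append(name)
    return edge, cloud

edge, cloud = schedule_online([("t1", 5, 2), ("t2", 9, 3), ("t3", 7, 3)], edge_cpu=4)
```

Serving the most important task first is what protects the completion rate of important tasks when the edge device's resources run out.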
Abstract:
With the explosive growth of the Internet of Things (IoT), IoT devices generate massive amounts of data and demand, which poses a huge challenge to IoT devices with limited CPU computing capability and battery capacity. Due to the dependency relationships in complex applications and network environments, effective offloading in such scenarios is complicated. In this paper, we address the problem of computation offloading with task dependency in a cloud-edge-end collaboration scenario comprising multiple users, multi-core edge servers, and a cloud server. We model the task dependencies as a directed acyclic graph (DAG) and formalize the offloading problem as a multi-objective mixed-integer optimization problem. To solve it, a Task Priority and Deep Reinforcement learning-based Task Offloading algorithm (TPDRTO) is proposed. The task offloading decision is represented as a Markov Decision Process (MDP). Based on the task priorities, an optimized Deep Reinforcement Learning (DRL) method with an action mask is proposed to leverage the computing resources of the cloud and edge servers and obtain the optimal offloading policies. Experimental results show that the TPDRTO algorithm can effectively trade off and reduce the average energy consumption and time delay of IoT devices.
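One common way to derive a task priority from a DAG is the longest remaining execution path to the exit, as in classic list scheduling; whether TPDRTO uses exactly this rank is an assumption of this sketch:

```python
def task_priorities(cost, successors):
    """Rank tasks by the longest remaining execution path to the DAG exit,
    so tasks on the critical path are considered for offloading first."""
    memo = {}

    def rank(t):
        if t not in memo:
            # A task's rank is its own cost plus the best rank among its successors.
            memo[t] = cost[t] + max((rank(s) for s in successors.get(t, [])), default=0)
        return memo[t]

    return sorted(cost, key=rank, reverse=True)

# Diamond DAG: a -> {b, c} -> d, with per-task execution costs.
order = task_priorities({"a": 2, "b": 3, "c": 1, "d": 2},
                        {"a": ["b", "c"], "b": ["d"], "c": ["d"]})
```

The resulting order also respects precedence (every task ranks above its successors), so the DRL agent can make offloading decisions task by task while the action mask rules out sites that would violate the dependencies.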
Abstract:
The rapid development of the Internet of Things has put forward higher requirements for the processing capacity of the network. Cloud-edge collaboration technology can make full use of computing resources and improve this processing capacity. However, designing a collaborative assignment strategy among different devices that minimizes the system cost remains challenging. In this paper, a task collaborative assignment algorithm based on a genetic algorithm and a simulated annealing algorithm is proposed. First, a task collaborative assignment framework for cloud-edge collaboration is constructed. Second, the task assignment problem is transformed into a function optimization problem whose objective is to minimize the combined time delay and energy consumption cost. To solve this problem, a task assignment algorithm combining an improved genetic algorithm with simulated annealing is proposed, and the optimal task assignment strategy is obtained. Finally, simulation results show that, compared with traditional cloud computing, the proposed method improves system efficiency by more than 25%.
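The simulated-annealing half of such a hybrid can be sketched generically as below; the toy cost function (edge execution costs 2, cloud execution costs 1 per task) and all parameters are illustrative assumptions, not the paper's model:

```python
import math
import random

def simulated_annealing(cost, init, neighbour, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Generic annealing loop: always accept improving assignments, and accept
    worse ones with probability exp(-delta / T) so the search can escape
    local optima; T decays geometrically each step."""
    rng = random.Random(seed)
    current, current_cost = init, cost(init)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current, rng)
        delta = cost(candidate) - current_cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Assignment vector: 0 = run on edge (toy cost 2), 1 = run in cloud (toy cost 1).
toy_cost = lambda x: sum(2 if d == 0 else 1 for d in x)
flip_one = lambda x, rng: [d ^ 1 if i == rng.randrange(len(x)) else d
                           for i, d in enumerate(x)]
best, best_cost = simulated_annealing(toy_cost, [0] * 5, flip_one)
```

In the hybrid algorithm the genetic operators would supply candidate assignments, with annealing-style acceptance keeping the population from collapsing into a local optimum.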
Abstract:
Emerging computing paradigms such as edge computing, fog computing, and multi-access edge computing (MEC) provide field-level service responses for users. Edge virtualization technologies represented by Docker can provide a platform-independent, low-resource-consumption operating environment for edge services. The image-pulling time of Docker is a crucial factor affecting the start-up speed of edge services, yet the layer reuse mechanism of native Docker cannot fully utilize the duplicate data of local images on a node. In this paper, we propose a chunk reuse mechanism (CRM) that effectively targets node-local duplicate data during container updates and reduces the volume of data that must be transmitted for image building. We orchestrate the CRM process across cloud and remote-cloud nodes to keep the resource overhead of container update data preparation and image reconstruction within an acceptable range. Experimental results show that the proposed CRM can effectively utilize node-local duplicate data during the synchronous update of containers on multiple nodes, reduce the volume of data transmission, and significantly improve container update efficiency.
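The core idea of chunk-level reuse, transferring only the chunks whose hashes a node does not already hold, can be sketched as follows; the tiny fixed chunk size is purely for illustration (real systems use kilobyte-scale or content-defined chunks), and this is not the paper's CRM implementation:

```python
import hashlib

CHUNK = 4  # illustrative chunk size

def chunks(data, size=CHUNK):
    """Split a byte string into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def plan_transfer(new_image, local_images):
    """Return only the chunks of the new image whose hashes are not already
    present in any image on the node, so an update ships just the novel data."""
    have = {hashlib.sha256(c).hexdigest() for img in local_images for c in chunks(img)}
    return [c for c in chunks(new_image) if hashlib.sha256(c).hexdigest() not in have]

# Only the middle chunk differs from the locally cached image, so only it is sent.
to_send = plan_transfer(b"aaaaXXXXcccc", [b"aaaabbbbcccc"])
```

After the transfer, the receiving node reassembles the image from its local chunks plus the newly received ones, which is where the image-reconstruction overhead mentioned above arises.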